Enable Pipeline Parallelism on jax worker #1043
base: main
Conversation
Signed-off-by: Chenyaaang <[email protected]>
```python
    self.step_counter += 1
    return None
else:
    self.step_counter += 1
```
why do we need this step_counter?
It's used to generate a UUID; we want it to be unique between each run and each worker, so we hash scheduler_output, step, and worker_rank.
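A minimal sketch of that scheme (illustrative only; the actual field names and hash function used in the PR may differ, this just shows why all three inputs are combined):

```python
import hashlib

def transfer_uuid(scheduler_output_repr: str, step: int, worker_rank: int) -> str:
    # Combine the scheduler output, the per-worker step counter, and the
    # worker rank so the id differs across steps and across workers.
    payload = f"{scheduler_output_repr}|{step}|{worker_rank}".encode()
    return hashlib.sha256(payload).hexdigest()
```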
Better to explain this in a comment at line 125.
```python
multihost_backend = os.environ.get("TPU_MULTIHOST_BACKEND", "").lower()
if multihost_backend != "ray" and self.parallel_config.pipeline_parallel_size > 1:
    # Note: Below is the setting for v6e8 host (8 chips of v6e)
    # There are 2 ways of subslicing a v6e:
```
Do we need to report errors if the settings don't match either of these two ways?
I use v6e8 as an example to show two ways to subslice the chips. I was thinking that if the customer uses other chips, they should replace this with their own topology. Do you have a better idea for how to express this?
Can the topology be passed as a config variable (with the default being one of v6e's supported topologies), or at least as a parameter of init_device(), so people can find the needed changes more easily? And please move lines 136-141 to line 152.
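One possible shape for that suggestion, sketched with hypothetical names (a small config object whose defaults match one of v6e8's supported subslicings, which the worker could consume when picking its local devices):

```python
from dataclasses import dataclass

@dataclass
class PPTopologyConfig:
    # Hypothetical config; defaults match one way of subslicing a v6e8
    # host: 2 pipeline stages with 4 chips each.
    pp_stages: int = 2
    chips_per_stage: int = 4

    def devices_for_rank(self, devices, rank):
        # Each pipeline stage takes a contiguous block of local chips.
        start = rank * self.chips_per_stage
        return devices[start:start + self.chips_per_stage]
```

Users on other chip types would then override the two fields instead of editing hard-coded constants inside init_device().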
@sixiang-google Hi Xiang, can you help me take a look at
```python
# For PP, we use MPMD so we want to profile every worker.
if self.pp_world_size > 1 and envs.VLLM_TORCH_PROFILER_DIR:
    self.profile_dir = os.path.join(envs.VLLM_TORCH_PROFILER_DIR,
                                    f"rank_{self.rank}")
```
Not sure about the convention here, but this might be more informative: f"rank_{self.rank}_{self.pp_world_size}"
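The suggested naming would produce directories like this (sketch; `base` stands in for `envs.VLLM_TORCH_PROFILER_DIR`, and the helper name is illustrative):

```python
import os

def profile_dir(base: str, rank: int, pp_world_size: int) -> str:
    # One subdirectory per worker; encoding the PP world size in the
    # name makes it obvious which run layout produced each trace.
    return os.path.join(base, f"rank_{rank}_{pp_world_size}")
```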
```python
assert jax.local_device_count(
) >= sharding_config.total_devices
self.devices = jax.local_devices()[:sharding_config.
```
nit: combine lines 183 and 184 to improve readability?
```python
self.rank, self.rank == 0,
self.rank == self.pp_world_size - 1)
logger.info(f"Init worker | "
            f"rank={self.rank} | "
```
Add world_size as well?
yixinshi left a comment
Good work! A general comment: shall we have a more specific PR title for each PR?
Description
The implementation of Pipeline Parallelism is split into the following small PRs.
This PR is to modify Jax worker to support PP.
- `__init__` takes in the current worker's IP and the previous worker's IP, so the transfer server and connection can be started later.
- In `execute_model`, PP workers that are not the first rank need to receive the intermediate tensor from the previous worker, and PP workers that are not the last rank need to send the intermediate tensor to the next worker.

Tests
An E2E test has verified that the whole PP implementation works properly.
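For reference, the receive/compute/send hand-off described above can be sketched as follows. This is not the PR's actual code: the transfer server is modeled with in-process queues, the `+ 1` stands in for running one stage's layers, and all names are illustrative.

```python
from queue import Queue

class PPWorker:
    # Minimal sketch of a pipeline-parallel worker's rank-dependent behavior.
    def __init__(self, rank, world_size, in_q, out_q):
        self.rank = rank
        self.is_first_rank = rank == 0
        self.is_last_rank = rank == world_size - 1
        self.in_q = in_q    # stands in for the connection to the previous worker
        self.out_q = out_q  # stands in for the connection to the next worker

    def execute_model(self, hidden):
        # Workers that are not first receive the intermediate tensor upstream.
        if not self.is_first_rank:
            hidden = self.in_q.get()
        hidden = hidden + 1  # placeholder for this stage's computation
        # Workers that are not last forward their activations downstream.
        if not self.is_last_rank:
            self.out_q.put(hidden)
            return None
        return hidden

q01, q12 = Queue(), Queue()
workers = [PPWorker(0, 3, None, q01),
           PPWorker(1, 3, q01, q12),
           PPWorker(2, 3, q12, None)]
outs = [w.execute_model(0) for w in workers]
# only the last rank returns a value; intermediate ranks return None
```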
Checklist
Before submitting this PR, please make sure: